Search results for "Ensemble methods"
Showing 6 of 6 documents
Multi-layer intrusion detection system with ExtraTrees feature selection, extreme learning machine ensemble, and softmax aggregation
2019
Abstract: Recent advances in intrusion detection systems based on machine learning have outperformed other techniques, but they struggle to detect multiple classes of attacks with high accuracy. We propose a method that works in three stages. First, an ExtraTrees classifier selects the features relevant to each type of attack, individually for each extreme learning machine (ELM). Then, an ensemble of ELMs detects each type of attack separately. Finally, the outputs of all ELMs are combined by a softmax layer to refine the results and increase accuracy further. The intuition behind our system is that multi-class classification is considerably harder than binary classification. So, we…
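The three-stage pipeline described in this abstract can be roughly sketched with scikit-learn. This is only an illustration under stated assumptions: a logistic-regression detector stands in for the paper's extreme learning machine, and the synthetic dataset is a hypothetical substitute for real intrusion records.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

# Toy multi-class data standing in for intrusion records (hypothetical).
X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

classes = np.unique(y)
selectors, detectors = {}, {}
for c in classes:
    y_bin = (y == c).astype(int)  # one-vs-rest binary target for this "attack"
    # Stage 1: ExtraTrees picks the features relevant to this class.
    sel = SelectFromModel(ExtraTreesClassifier(n_estimators=100, random_state=0))
    X_c = sel.fit_transform(X, y_bin)
    # Stage 2: a per-class binary detector (stand-in for the paper's ELM).
    clf = LogisticRegression(max_iter=1000).fit(X_c, y_bin)
    selectors[c], detectors[c] = sel, clf

# Stage 3: combine the per-class scores with a softmax layer.
scores = np.column_stack(
    [detectors[c].predict_proba(selectors[c].transform(X))[:, 1] for c in classes])
probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
pred = classes[probs.argmax(axis=1)]
print("training accuracy:", (pred == y).mean())
```

The point of the sketch is the structure, not the numbers: each binary detector sees only its own feature subset, and the softmax turns the independent scores into one multi-class decision.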
Evaluation of Ensemble Machine Learning Methods in Mobile Threat Detection
2017
The number of mobile devices continues to soar, causing a massive increase in cyber-security threats. The most pervasive threats include ransomware, banking malware, and premium-SMS fraud. Solitary hackers use tailored techniques to avoid detection by traditional antivirus software. There is an emerging need to detect these threats with flow-based network solutions. Therefore, we propose and evaluate a network-based model that uses ensemble Machine Learning (ML) methods to identify mobile threats by analyzing the network flows of the malware's communication. The ensemble ML methods not only protect against over-fitting of the model but also cope with the issues related to the changing be…
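A minimal sketch of the kind of ensemble this abstract evaluates: several diverse classifiers over flow features, combined by soft voting. The feature set and dataset here are hypothetical stand-ins, not the paper's actual data or model choices.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for flow features (duration, byte counts, packet counts, ...).
X, y = make_classification(n_samples=500, n_features=12, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Soft voting averages predicted probabilities across diverse base models,
# which reduces the variance (over-fitting) of any single classifier.
ens = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=1)),
                ("gb", GradientBoostingClassifier(random_state=1)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")
ens.fit(X_tr, y_tr)
print("held-out accuracy:", ens.score(X_te, y_te))
```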
Ensemble methods for item-weighted label ranking: a comparison
2022
Label Ranking (LR), an emerging non-standard supervised classification problem, aims at training preference models that order a finite set of labels based on a set of predictor features. Traditional LR models regard all labels as equally important. However, in many cases, failing to predict the ranking position of a highly relevant label can be considered more severe than failing to predict a trivial one. Moreover, an efficient LR classifier should be able to take into account the similarity between the items to be ranked. Indeed, swapping two similar elements should be less penalized than swapping two dissimilar ones. The contribution of the present paper is to formulate more flexible item…
Comparing Boosting and Bagging for Decision Trees of Rankings
2021
Abstract: Decision tree learning is among the most popular and most traditional families of machine learning algorithms. While these techniques excel at being intuitive and interpretable, they also suffer from instability: small perturbations in the training data may result in big changes in the predictions. The so-called ensemble methods combine the outputs of multiple trees, which makes the decision more reliable and stable. They have primarily been applied to numeric prediction problems and to classification tasks. In recent years, some attempts to extend ensemble methods to ordinal data can be found in the literature, but no concrete methodology has been provided for preference…
Boosting for ranking data: an extension to item weighting
2021
Decision trees are a particularly widespread predictive machine-learning technique, used to predict discrete (classification) or continuous (regression) variables. The algorithms behind these techniques are intuitive and interpretable, but also unstable. Indeed, to make classification more reliable, the outputs of several trees are usually combined. In the literature, several approaches have been proposed to classify ranking data with decision trees, but none of them accounts for either the importance or the similarity of the individual items in each ranking. The aim of this article is to propose a weighted extension of the method …
ENSEMBLE METHODS FOR RANKING DATA
2017
Recent years have seen a remarkable flowering of work on the use of decision trees for ranking data. As a matter of fact, decision trees are useful and intuitive, but they are very unstable: small perturbations bring big changes. This is why it can be necessary to use more stable procedures, such as ensemble methods, to find which predictors are able to explain the preference structure. In this work, ensemble methods such as bagging and Random Forest are proposed, from both a theoretical and a computational point of view, for deriving classification trees when ranking data are observed. The advantages of these procedures are shown through an example on the SUSHI data set.
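The bagging procedure at the core of this last abstract can be sketched by hand: fit each tree on a bootstrap resample and aggregate by majority vote. Note this sketch uses ordinary class labels; for the paper's setting one would replace them with observed rankings, which this illustration does not attempt.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Bagging by hand: each tree sees a bootstrap resample of the training set.
trees = []
for b in range(25):
    idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
    trees.append(DecisionTreeClassifier(random_state=b).fit(X[idx], y[idx]))

votes = np.stack([t.predict(X) for t in trees])    # shape (n_trees, n_samples)
majority = (votes.mean(axis=0) > 0.5).astype(int)  # majority vote over trees
print("training accuracy:", (majority == y).mean())
```

Averaging over bootstrap resamples is exactly what stabilizes the unstable single-tree predictions the abstract warns about; Random Forest adds random feature subsetting at each split on top of this.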